16 research outputs found

    Robustness and prediction accuracy of machine learning for objective visual quality assessment

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms at work in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptimal. A Principal Component Regression based algorithm and a Feed Forward Neural Network are compared when pooling the Structural Similarity Index (SSIM) features perturbed with noise. The neural network adapts better to noise and intrinsically favours features according to their salient content.
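The Principal Component Regression side of the comparison can be illustrated with a minimal sketch on synthetic, noise-perturbed features; the data, noise level, and feature dimensionality below are illustrative stand-ins for pooled SSIM features, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pooled SSIM-like features: 200 samples, 6 features,
# with a quality score driven by the first two features (assumed setup).
X = rng.normal(size=(200, 6))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.05, size=200)

# Perturb the feature set with noise, mimicking a suboptimal feature set.
X_noisy = X + rng.normal(scale=0.5, size=X.shape)

def pcr_fit_predict(X_train, y_train, X_test, n_components=2):
    """Principal Component Regression: project the data onto its top
    principal components, then fit ordinary least squares in that subspace."""
    mu = X_train.mean(axis=0)
    Xc = X_train - mu
    # Principal directions from the SVD of the centred training data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T          # (n_features, n_components)
    Z = Xc @ V                       # scores in the reduced subspace
    beta, *_ = np.linalg.lstsq(Z, y_train, rcond=None)
    return ((X_test - mu) @ V) @ beta

y_hat = pcr_fit_predict(X_noisy[:150], y[:150], X_noisy[150:])
rmse = np.sqrt(np.mean((y_hat - y[150:]) ** 2))
print(f"PCR RMSE on noisy features: {rmse:.3f}")
```

A feed-forward network trained on the same noisy features would, per the abstract's finding, be expected to degrade more gracefully, since it can reweight features according to their remaining salient content.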

    Immersion and togetherness: How live visualization of audience engagement can enhance music events

    This paper evaluates the influence of an additional visual aesthetic layer on the experience of concertgoers during a live event. The additional visual layer incorporates musical features as well as bio-sensing data collected during the concert, coordinated by our audience engagement monitoring technology. This technology was used during a real Jazz concert. The collected measurements were used in an experiment with 32 participants, in which two different forms of visualization were compared: one factoring in music amplitude, audience engagement collected by the sensors, and the dynamic atmosphere of the event; the other relying purely on the beat of the music. The findings indicate that the visual layer could add value to the experience if used during a live concert, providing a higher level of immersion and feeling of togetherness among the audience.

    Understanding the role of social context and user factors in video quality of experience

    Quality of Experience (QoE) is a concept that reflects the level of satisfaction of a user with multimedia content, a service, or a system. So far, the objective (i.e., computational) approaches to measuring QoE have been mostly based on the analysis of the media's technical properties. However, recent studies have shown that this approach cannot sufficiently estimate user satisfaction, and that QoE depends on multiple factors besides the media's technical properties. This paper aims to identify the role of social context and user factors (such as interest and demographics) in determining the quality of the viewing experience. We also investigate the relationships between social context, user factors, and some media technical properties whose effect on image quality is already known (i.e., bitrate level and video genre). Our results show that the presence of co-viewers increases the user's level of enjoyment and enhances the endurability of the experience, and so does interest in the video content. Furthermore, although participants can clearly distinguish the various levels of video quality used in our study, these do not affect any of the other aspects of QoE. Finally, we report an impact of both gender and cultural background on QoE. Our results provide a first step toward building an accurate model of user QoE appreciation, to be deployed in future multimedia systems to optimize the user experience.

    Predicting mood from punctual emotion annotations on videos

    A smart environment designed to adapt to a user's affective state should be able to decipher, unobtrusively, that user's underlying mood. Great effort has been devoted to automatic punctual emotion recognition from visual input. Conversely, little has been done to recognize longer-lasting affective states, such as mood. Taking for granted the effectiveness of emotion recognition algorithms, we propose a model for estimating mood from a known sequence of punctual emotions. To validate our model experimentally, we rely on the human annotations of two well-established databases: VAM and HUMAINE. We perform two analyses: the first serves as a proof of concept and tests whether punctual emotions cluster around the mood in the emotion space. The results indicate that emotion annotations, continuous in time and value, facilitate mood estimation, as opposed to discrete emotion annotations scattered randomly within the video timespan. The second analysis explores factors that account for mood recognition from emotions, by examining how individual human coders perceive the underlying mood of a person. A moving average function with exponential discount of the past emotions achieves mood prediction accuracy above 60 percent, which is higher than the chance level and higher than mutual human agreement.
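The best-performing model named above, a moving average with an exponential discount of past emotions, can be sketched in a few lines; the discount factor and the example valence values are illustrative, not the parameters fitted in the paper.

```python
import numpy as np

def mood_estimate(emotions, discount=0.9):
    """Estimate mood as a moving average of punctual emotion annotations,
    discounting older emotions exponentially so that recent ones weigh more.
    `emotions` is a time-ordered sequence of, e.g., valence values."""
    # Weight the most recent annotation by 1, the one before by `discount`,
    # the one before that by discount**2, and so on.
    weights = discount ** np.arange(len(emotions) - 1, -1, -1)
    return float(np.dot(weights, emotions) / weights.sum())

# A run of mildly negative emotions ending on a positive note pulls the
# estimated mood toward the more recent, positive annotations.
print(mood_estimate([-0.2, -0.1, 0.0, 0.4, 0.6]))
```

A discount close to 1 approaches a plain running mean, while a small discount makes the mood estimate track the most recent punctual emotions almost exclusively.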

    Towards a comprehensive model for predicting the quality of individual visual experience

    Recently, a lot of effort has been devoted to estimating the Quality of Visual Experience (QoVE) in order to optimize video delivery to the user. For many decades, existing objective metrics mainly focused on estimating the perceived quality of a video, i.e., the extent to which artifacts due to, e.g., compression disrupt the appearance of the video. Other aspects of the visual experience, such as enjoyment of the video content, were, however, neglected. In addition, Mean Opinion Scores were typically targeted, deeming the prediction of individual quality preferences too hard a problem. In this paper, we propose a paradigm shift and evaluate the opportunity of predicting individual QoVE preferences, in terms of video enjoyment as well as perceived quality. To do so, we explore the potential of features of different natures to be predictive of a user's specific experience with a video. We thus consider not only features related to the perceptual characteristics of a video, but also features related to its affective content. Furthermore, we integrate into our framework information about the user and the usage context. The results show that effective feature combinations can be identified to estimate QoVE from the perspective of both enjoyment and perceived quality.

    The aesthetic appeal of depth of field in photographs

    We report here how depth of field (DOF) affects the aesthetic appeal of photographs for different content categories. 339 photographs spanning eight categories were selected from Flickr, Google+, and personal collections. First, we classified the 339 photographs into three levels of depth of field: small, medium, and large. Then, we asked participants to rate the aesthetic appeal of these photographs in random order. We found that aesthetic appeal is significantly influenced by content category overall, but by depth of field only for animal- and sport-related photographs. Therefore, we conclude that depth of field should not be regarded as a common criterion for judging aesthetic appeal across different semantic content categories.